
    Properties of pedestrians walking in line without density constraint

    This article studies pedestrian behaviour in one-dimensional traffic situations. We asked participants either to walk in a straight line behind a fast or slow leader, or to form a circle, without ever imposing a density. While the observed density results from individual decisions in the line case, both density and velocity have to be chosen collectively in the case of circle formation. In the latter case, interestingly, the resulting velocity is very stable across realizations, as if the collective decision played the role of an average. In the line experiment, although participants could choose comfortable headways, they tended to stick to short headways requiring faster adaptation, a fact that could stem from a "social pressure from behind". For flows close to the jamming transition, the same operating point is chosen as in previous experiments in which it was not velocity but density that was imposed. All these results show that the walking values preferred by humans in following tasks depend on more factors than previously considered. (Main paper: 11 pages, 13 figures; Supplementary Material: 8 pages, 9 figures.)
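As a hedged illustration of the quantities at play in the line experiment (not the authors' code, and with made-up numbers): in single-file motion on a closed circle, the global density is fixed by the number of walkers and the circumference, so the mean headway is constrained to be the inverse of the density even though each individual headway is freely chosen.

```python
import numpy as np

def headways(positions, circumference):
    """Gap from each walker to the next one along a closed circle."""
    s = np.sort(np.asarray(positions, float) % circumference)
    return np.diff(np.append(s, s[0] + circumference))

# Toy configuration: 6 walkers on a 27 m circle (illustrative values).
L, pos = 27.0, [0.0, 2.5, 6.0, 11.0, 15.5, 21.0]
h = headways(pos, L)
density = len(pos) / L
# Individual headways vary, but their mean is pinned to 1/density.
assert np.isclose(h.mean(), 1.0 / density)
```

This is why, in the line case, the density is an emergent outcome of individual headway choices, whereas on the circle the group effectively negotiates one operating point.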

    The influence of infant-directed speech on 12-month-olds' intersensory perception of fluent speech

    The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio-visual fluent speech in 12-month-old infants. German-learning infants' audio-visual matching ability for German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might influence the intersensory perception of fluent speech, and they shed further light on multisensory perceptual narrowing.

    Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting

    In this paper we study how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight-lifting task. Users could map their gestures onto the self-animated avatar in real time using a Kinect. We introduce three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task consisted of ordering four virtual dumbbells according to their virtual weight. The user had to lift each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance. Nevertheless, participants globally appreciated all the different visual effects. Our results pave the way to the exploitation of such novel techniques in various VR applications such as sport training, exercise games, or industrial training scenarios, in single-user or collaborative mode.

    Gaze Behaviour During Collision Avoidance Between Walkers: A Preliminary Study to Design an Experimental Platform

    When walking, vision is the main source of information that allows us to navigate safely by detecting potential collisions with other walkers. In order to gain a better understanding of the relationship between gaze activity and the kinematics of motion during pedestrian interactions, we present in this paper a preliminary study towards designing a more comprehensive experimental platform. In this study, participants are asked to avoid collisions with an oncoming virtual character using a joystick, while we measure their gaze behaviour with an eye-tracker. As we are interested in the effects of potential collisions on gaze activity, i.e., where and when participants look to avoid potential future collisions, we display a virtual character for which we vary the initial Time To Closest Approach (ttca) and Distance of Closest Approach (dca) values, so as to change its risk of collision with the participant. We then measure the participant's trajectory adjustments and gaze activity during the interaction. Our preliminary results show which type of data this platform produces, and demonstrate the interest of designing more comprehensive experiments and tools to analyze both gaze activity and kinematics.
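For readers unfamiliar with ttca and dca, here is a minimal sketch (an assumption about the standard definitions, not the authors' implementation): for two agents moving at constant velocity, both quantities follow in closed form from the relative position r and relative velocity v, with ttca clamped to zero when the closest approach lies in the past.

```python
import numpy as np

def ttca_dca(p1, v1, p2, v2):
    """Time and Distance of Closest Approach for two constant-velocity agents."""
    r = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    vv = v @ v
    # Minimize |r + t*v| over t >= 0; if v == 0 the distance is constant.
    ttca = 0.0 if vv == 0.0 else max(0.0, -(r @ v) / vv)
    dca = np.linalg.norm(r + ttca * v)
    return ttca, dca

# Toy example: two walkers approaching head-on with a 1 m lateral offset.
t, d = ttca_dca([0, 0], [1.0, 0], [10, 1.0], [-1.0, 0])
# r = (10, 1), v = (-2, 0): ttca = 5.0 s, dca = 1.0 m
```

Varying these two initial values, as in the experiment, directly modulates how threatening the virtual character's trajectory appears.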

    Experimental Study of Collective Pedestrian Dynamics

    We report on two series of experiments, conducted within the framework of two different collaborations, designed to study how pedestrians adapt their trajectories and velocities in groups or crowds. Strong emphasis is put on the motivations behind the chosen protocols and on the experimental implementation. The first series deals with pattern formation, interactions between pedestrians, and decision-making in pedestrian groups at low to medium densities. In particular, we show how pedestrians adapt their headways in single-file motion depending on the (prescribed) leader's velocity. The second series of experiments focuses on static crowds at higher densities, a situation that can be critical in real life and in which the pedestrians' choices of motion are strongly constrained sterically. More precisely, we study the crowd's response to being crossed by a pedestrian or by a cylindrical obstacle 74 cm in diameter. In the latter case, for a moderately dense crowd, we observe displacements that quickly decay with the minimal distance to the obstacle, over a length scale on the order of one meter.

    Interaction between real and virtual humans during walking: perceptual evaluation of a simple device

    Validating that a real user can correctly perceive the motion of a virtual human is a prerequisite for enabling realistic interactions between real and virtual humans during navigation tasks in virtual reality. In this paper we focus on collision avoidance tasks. Previous work established that real humans are able to accurately estimate others' motion and to avoid collisions with anticipation. Our main contribution is to propose a perceptual evaluation of a simple virtual reality system. The goal is to assess whether real humans are also able to accurately estimate a virtual human's motion before collision avoidance. Results show that, even with a simple system, users are able to correctly evaluate the situation of an interaction from a qualitative point of view. In particular, in comparison with real interactions, users accurately decide whether or not they should give way to the virtual human. However, from a quantitative point of view, it is not easy for users to determine whether they will collide with virtual humans. On the one hand, deciding whether to give way is a two-choice problem. On the other hand, detecting a future collision requires determining whether certain visual variables lie within a given interval. We discuss this problem in terms of the bearing angle.
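The bearing-angle cue mentioned above can be sketched as follows (an illustrative toy, not the paper's protocol): an obstacle seen under a constant bearing angle, i.e. a fixed direction relative to the observer's heading, is on a collision course, whereas a drifting bearing means the two walkers will pass each other.

```python
import math

def bearing(p_self, p_other, heading):
    """Angle of the line of sight to the other agent, relative to own heading (rad)."""
    dx, dy = p_other[0] - p_self[0], p_other[1] - p_self[1]
    return math.atan2(dy, dx) - heading

# Walker heading along +x at 1 m/s; virtual human moving so that the
# relative velocity is parallel to the initial relative position, i.e.
# a collision course. Two successive observations, one second apart:
b1 = bearing((0.0, 0.0), (10.0, 5.0), 0.0)
b2 = bearing((1.0, 0.0), (9.0, 4.0), 0.0)
# b1 == b2: the bearing does not drift, signalling a future collision.
```

Detecting collision then amounts to checking whether the bearing-angle derivative stays within a small interval around zero, which is precisely the threshold-on-a-visual-variable problem discussed above.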

    Correspondence-free online human motion retargeting

    We present a novel data-driven framework for unsupervised human motion retargeting, which animates a target body shape with a source motion. This makes it possible to retarget motions between different characters by animating a target subject with the motion of a source subject. Our method is correspondence-free, i.e. neither spatial correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion are required. Our method directly animates a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices. Our framework takes into account a long-term temporal context of 1 second during retargeting while accounting for surface details. To achieve this, we take inspiration from two lines of existing work: skeletal motion retargeting, which leverages long-term temporal context at the cost of surface detail, and surface-based retargeting, which preserves surface details without considering long-term temporal context. We unify the advantages of these works by combining a learnt skinning field with a skeletal retargeting approach. During inference, our method runs online, i.e. the input can be processed serially, and retargeting is performed in a single forward pass per frame. Experiments show that including long-term temporal context during training improves the method's accuracy both in terms of the retargeted skeletal motion and in terms of detail preservation. Furthermore, our method generalizes well to unobserved motions and body shapes. We demonstrate that the proposed framework achieves state-of-the-art results on two test datasets.
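To make the "skinning field + skeletal retargeting" combination concrete, here is a minimal linear-blend-skinning sketch. This is a generic illustration of the mechanism, not the paper's learnt model: the per-vertex weights stand in for the learnt skinning field (here a fixed matrix, as an assumption), and the bone transforms stand in for the retargeted skeletal motion; all names are illustrative.

```python
import numpy as np

def skin(vertices, weights, bone_transforms):
    """Linear blend skinning: vertices (V,3), weights (V,B) with rows
    summing to 1, bone_transforms (B,4,4) homogeneous matrices."""
    vh = np.hstack([vertices, np.ones((len(vertices), 1))])     # (V,4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, vh)    # (V,B,4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V,4)
    return blended[:, :3]

# Toy check: bone 1 is translated by +1 in x; a vertex fully bound to
# bone 1 follows it, while a vertex bound to the identity bone stays put.
T = np.repeat(np.eye(4)[None], 2, axis=0)
T[1, 0, 3] = 1.0
V = np.array([[0., 0., 0.], [1., 0., 0.]])
W = np.array([[1., 0.], [0., 1.]])
out = skin(V, W, T)
```

In this reading, the skeletal branch supplies long-term temporal context through the bone transforms, while the skinning field carries the per-vertex surface detail that pure skeletal retargeting discards.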